Purpose
Bibliometric indicators (metrics associated with publications) are used for purposes such as research assessment, measuring impact and institutional rankings. They may inform important decisions such as hiring, promotion and resource allocation, but they can also be misused. It is therefore crucial to understand the different types of bibliometric indicators and their limitations, to promote fair, transparent and robust research evaluation processes. This will help the Imperial research community to uphold Imperial’s five core values:
- integrity (implementing transparent and responsible use of metrics to ensure research integrity),
- collaboration (fostering an inclusive research culture for everyone),
- respect (respecting diverse research contributions),
- innovation (drawing appropriate insights from research evaluations to set the right direction),
- excellence (improving the overall quality of open research practices to create an excellent research culture).
Understanding bibliometrics and their appropriate use also helps you align with the principles of initiatives like the San Francisco Declaration on Research Assessment (DORA). By using metrics responsibly and looking beyond the journal impact factor and h-index, you will help foster a positive research culture that recognises efforts to embed transparency, equity, accessibility and reliability into research, rather than rewarding the volume of journal articles.
These webpages equip you with the confidence and know-how to use metrics responsibly. They explain what the field of bibliometrics is, the types of metrics and indicators, and the tools available at Imperial to help you understand the visibility, reach and impact of your publications beyond inappropriate measures like journal-based numbers.
For more information, contact bibliometrics@imperial.ac.uk.
What is bibliometrics?
Bibliometrics is the quantitative study of research outputs and can be applied to any type of research publication, author, or institution. The terms ‘bibliometrics’, ‘quantitative data’, ‘indicators’ and ‘metrics’ are often used interchangeably in this context.
Traditionally, bibliometrics refers to the number of citations a research output receives. However, thanks to emerging technology in data collection, other quantitative data can now be measured, such as numbers of downloads, page views, and mentions in social media or policy documents. These are known as alternative metrics or altmetrics (not to be confused with Altmetric).
Bibliometrics is traditionally used to measure research performance or demonstrate impact. In some contexts, metrics are used alongside expert peer review to assess publications. Bibliometrics is also useful for understanding publication, citation and collaboration trends in academia.
Responsible use of metrics
Metrics are increasingly used in academia, and in some cases they are important. However, inappropriate use may have unintended consequences that degrade trust in scholarship. All metrics have limitations and biases. They should therefore be used responsibly by all parts of the research infrastructure, and should complement, not supplant, expert peer review in assessing research.
Find out more about responsible metrics
Imperial’s position
Imperial is committed to ensuring that its procedures for assessing the achievements of all staff are fair, transparent and robust. In 2015 it undertook the Richardson review (PDF), investigating the use of bibliometrics in staff performance assessment. A formal commitment not to consider journal-based metrics (such as the Journal Impact Factor) in recruitment and promotion decisions has been in place since 2017, when Imperial became a signatory of the San Francisco Declaration on Research Assessment (DORA). This has implications for how bibliometrics are used to evaluate research outputs at Imperial.
The Richardson review also acknowledges that the concept of responsible metrics must align with the five principles set out in ‘The Metric Tide: Review of Metrics in Research Assessment’, a report published by Research England:
- robustness – basing metrics on the best possible data,
- humility – recognising that quantitative indicators should support, not supplant, qualitative assessment,
- transparency – keeping data collection and analysis processes transparent,
- diversity – reflecting a diversity of research and researcher career paths,
- reflexivity – recognising the systemic effects of indicators and updating them where possible.
These frameworks mean that all decision-making processes requiring assessment of research performance (e.g. recruitment, promotion or allocation of funding) should avoid the use of journal-based metrics, take account of the limitations of metrics, and align with DORA and the principles of the Metric Tide report. Promoting and implementing responsible use of metrics will also uphold the Imperial values of respect, collaboration, excellence, integrity and innovation.